The Journal of Neuroscience
● Society for Neuroscience
All preprints, ranked by how well they match The Journal of Neuroscience's content profile, based on 928 papers previously published here. The average preprint has a 0.47% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
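The page does not document how the match score is computed, but a score of this kind is typically some normalized similarity between a preprint and the journal's publication history. Purely as an illustrative sketch — the function name, embedding space, and cosine-similarity measure are all assumptions, not the site's actual pipeline:

```python
import numpy as np

def match_score(preprint_vec, journal_vecs):
    """Illustrative only: mean cosine similarity between one preprint's
    text embedding and the embeddings of a journal's past papers."""
    p = preprint_vec / np.linalg.norm(preprint_vec)
    J = journal_vecs / np.linalg.norm(journal_vecs, axis=1, keepdims=True)
    return float(np.mean(J @ p))

# Toy usage: stand-in for the 928 published papers, embedded in 64 dimensions.
rng = np.random.default_rng(0)
journal = rng.normal(size=(928, 64))
preprint = rng.normal(size=64)
score = match_score(preprint, journal)  # higher = closer to the journal's profile
```

Ranking preprints by such a score and comparing each to the mean across all preprints would reproduce the "above-average fit" framing used above.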
Walia, A.; Ortmann, A. J.; Lefler, S.; Holden, T.; Herzog, J. A.; Buchman, C. A.
The cochlea's capacity to decode sound frequencies is enhanced by a unique structural arrangement along its longitudinal axis, a feature termed tonotopy or place coding. Auditory hair cells at the cochlea's base are activated by high-frequency sounds, while those at the apex respond to lower frequencies. Presently, our understanding of tonotopy hinges primarily on electrophysiological, mechanical, and anatomical studies conducted in animals or human cadavers. However, direct in vivo measurements of tonotopy in humans have been elusive due to the invasive nature of these procedures. This absence of live human data has posed an obstacle to establishing an accurate tonotopic map for patients, potentially limiting advances in cochlear implant and hearing enhancement technologies. In this study, we conducted acoustically evoked intracochlear recordings in 50 human subjects using a longitudinal multi-electrode array. These electrophysiological measures, combined with postoperative imaging to accurately locate the electrode contacts, allowed us to create the first in vivo tonotopic map of the human cochlea. Furthermore, we examined the influences of sound intensity, electrode array presence, and the creation of an artificial third window on the tonotopic map. Our findings reveal a significant disparity between the tonotopic map at everyday conversational speech levels and the conventional (i.e., Greenwood) map derived at close-to-threshold levels. Our findings not only have implications for advancing cochlear implant and hearing augmentation technologies, but also offer novel insights for future investigations into auditory disorders, speech processing, language development, and age-related hearing loss, and could inform more effective educational and communication strategies for those with hearing impairments.
Significance Statement: The ability to discriminate sound frequencies, or pitch, is vital for communication and facilitated by a unique arrangement of cells along the cochlear spiral (tonotopic place). While earlier studies have provided insight into frequency selectivity based on animal and human cadaver studies, our understanding of the in vivo human cochlea remains limited. Our research offers, for the first time, in vivo electrophysiological evidence from humans, detailing the tonotopic organization of the human cochlea. We demonstrate that the functional arrangement in humans significantly deviates from the conventional Greenwood function, with the operating point of the in vivo tonotopic map showing a basal (or frequency downward) shift. This pivotal finding could have far-reaching implications for the study and treatment of auditory disorders.
Synigal, S. R.; Anderson, A. J.; Lalor, E. C.
The past few years have seen an increase in the use of encoding models to explain neural responses to natural speech. The goal of these models is to characterize how the human brain converts acoustic energy into distinct linguistic representations that enable everyday speech comprehension. For example, researchers have shown that electroencephalography (EEG) data can be modeled in terms of acoustic features of speech, such as its amplitude envelope or spectrogram, linguistic features such as phonemes and phoneme probability, and higher-level linguistic features like context-based word predictability. However, it is unclear how reliably EEG indices of these speech feature representations reflect comprehension in different listening conditions. To address this, we recorded EEG from neurotypical adults who listened to segments of an audiobook in various levels of background noise. We modeled how their EEG responses reflected a range of acoustic and linguistic speech features and how this tracking varied with behavior across noise levels. EEG tracking of nearly all examined features showed SNR-dependent changes in unique variance explained, with the largest changes occurring for linguistic features. We hypothesized that only higher-level feature tracking would predict behavior but instead found that both high- and low-level features were associated with behavioral scores depending on the noise level. EEG markers of the influence of top-down, context-based prediction on bottom-up acoustic processing also correlated with behavior. These findings help characterize the relationship between brain and behavior by comprehensively linking hierarchical indices of neural speech processing to language comprehension metrics. SIGNIFICANCE STATEMENT: Acoustic and linguistic features of speech have been shown to be consistently tracked by neural activity even in noisy conditions.
However, it is unclear how signatures of low- and high-level features covary with one another and relate to behavior across these listening conditions. Here, we find that linguistic (phonetic feature and word probability-based feature) processing is affected by noise more than low-level acoustic feature processing. We also find that behavioral performance is associated with acoustic, phonetic, and lexical surprisal tracking, and that these associations depend on background noise levels. These results extend our understanding of how various speech features are comparatively reflected in electrical brain activity and how they relate to perception in challenging listening conditions.
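The "unique variance explained" metric in the abstract above is, in general form, the cross-validated R-squared lost when one feature set is removed from a full encoding model. A minimal sketch of that logic using closed-form ridge regression — the feature matrices, train/test split, and regularization here are placeholders, not the authors' actual pipeline:

```python
import numpy as np
from numpy.linalg import solve

def ridge_fit_predict(X_train, y_train, X_test, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X_train.shape[1]
    w = solve(X_train.T @ X_train + lam * np.eye(d), X_train.T @ y_train)
    return X_test @ w

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def unique_variance(X_full, X_reduced, y, split=0.5, lam=1.0):
    """R^2 gain of the full model over a model missing one feature set."""
    n = int(len(y) * split)
    r2_full = r_squared(y[n:], ridge_fit_predict(X_full[:n], y[:n], X_full[n:], lam))
    r2_red = r_squared(y[n:], ridge_fit_predict(X_reduced[:n], y[:n], X_reduced[n:], lam))
    return r2_full - r2_red

# Toy usage: EEG-like signal driven by both "acoustic" and "linguistic" features.
rng = np.random.default_rng(1)
acoustic = rng.normal(size=(400, 3))
linguistic = rng.normal(size=(400, 2))
y = acoustic @ np.array([1.0, -0.5, 0.3]) + linguistic @ np.array([0.8, 0.6]) \
    + 0.1 * rng.normal(size=400)
uv = unique_variance(np.hstack([acoustic, linguistic]), acoustic, y)
```

If the signal genuinely depends on the linguistic features, `uv` is positive; tracking how it shrinks as noise is added mirrors the SNR-dependent changes the authors report.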
Gorman, D.; Wong, N. F.; Schupbach, C. W.; DiCenso, S. L.; Xu-Friedman, S. C.; Boergens, K. M.; Lauer, A. M.; Salles, A.; Xu-Friedman, M. A.
Moderate noise exposure is a common experience, yet its impact on central auditory synapses remains poorly understood. We study this issue at the first synapses in the central auditory pathway formed by auditory nerve afferents onto bushy cells in the cochlear nucleus, called endbulbs of Held. Non-traumatic noise exposure alters endbulb properties, decreasing the probability of vesicle release and enlarging the pool of releasable vesicles as assessed using electrophysiological methods and immunolabelling. These changes appear homeostatic, to maintain synaptic efficacy during periods of high activity. To identify structural changes underlying the larger vesicle pool, we used serial blockface electron microscopy of endbulbs from control and noise-exposed mice to quantitatively assess synaptic morphology. We observed no differences in the juxtapositional area between endbulbs and bushy cells, nor in the number or density of active zones and postsynaptic densities. Images of endbulb terminals were significantly darker after noise exposure, indicating an increase in the density of synaptic vesicles. These results suggest that moderate noise exposure induces an activity-dependent increase in presynaptic vesicle numbers, consistent with the observed physiological changes in neurotransmitter release. This work sets the stage for high-resolution studies to quantify docked and reserve vesicles. Significance statement: Noise exposure is a fact of everyday life, and it is important to understand how noise affects function in the auditory pathway in the brain to understand the full consequences of noise exposure. Electrophysiological experiments indicate that noise triggers a homeostatic increase in the releasable pool of vesicles at auditory nerve synapses. We examined the cellular basis for this change using serial blockface electron microscopy of auditory nerve synapses with and without noise exposure.
We reconstructed a number of bushy cells and their presynaptic auditory nerve terminals. After noise exposure, there was no significant increase in the area of synaptic contact or the number or density of synaptic release sites. There was an increase in the number of vesicles near release sites, which may account for the physiological changes. These results emphasize the importance of detailed anatomical studies for determining the effects of noise exposure and thus the best mechanistic approach for therapies and treatments of noise-induced hearing loss.
Avila, E.; Flierman, N.; Holland, P. J.; Roelfsema, P. J.; Frens, M. A.; Badura, A.; De Zeeuw, C. I.
Volitional suppression of responses to distracting external stimuli enables us to achieve our goals. This volitional inhibition of a specific behavior is thought to be mediated mainly by the cerebral cortex. However, recent evidence supports the involvement of the cerebellum in this process. It is currently not known whether different parts of the cerebellar cortex play differential or synergistic roles in planning and execution of this behavior. Here, we measured Purkinje cell (PC) responses in the medial and lateral cerebellum in two rhesus macaques during a pro- and antisaccade task. During an antisaccade trial, non-human primates were instructed to make a saccadic eye movement away from a target, rather than towards it, as in prosaccade trials. Our data show that the cerebellum plays an important role not only during execution of the saccades, but also during the volitional inhibition of eye movements towards the target. Simple Spike (SS) modulation during the instruction and execution period of pro- and antisaccades was prominent in PCs of both medial and lateral cerebellum. However, only the SS activity in the lateral cerebellar cortex contained information about trial identity and showed a stronger reciprocal interaction with complex spikes. Moreover, SS activity of different PC groups modulated bidirectionally in both regions, but the PCs that showed facilitating and suppressive activity were predominantly associated with instruction and execution, respectively. These findings show that different cerebellar regions and PC groups contribute to goal-directed behavior and volitional inhibition, but with different propensities, highlighting the rich repertoire of cerebellar control in executive functions. Significance Statement: The antisaccade task is commonly used in research and clinical evaluation as a test of volitional and flexible control of behavior. It requires volitional suppression of prosaccades, a function that has been attributed to the neocortex.
However, recent findings indicate that the cerebellum also contributes to this behavior. We recorded from neurons in the medial and lateral cerebellum to evaluate their responses in this task. We found that both regions significantly modulated their activity during this task, but only cells in the lateral cerebellum encoded the stimulus identity in each trial. These results indicate that the cerebellum actively contributes to the control of flexible behavior and that the lateral and medial cerebellum play different roles during volitional eye movements.
Kaneko, M.; Hoseini, M. S.; Waschek, J. A.; Stryker, M. P.
When adult mice are repeatedly exposed to a particular visual stimulus for as little as one hour per day for several days while their visual cortex (V1) is in the high-gain state produced by locomotion, that specific stimulus elicits much stronger responses in V1 neurons for the following several weeks, even when measured in anesthetized animals. Such stimulus-specific enhancement (SSE) is not seen if locomotion is prevented. The effect of locomotion on cortical responses is mediated by vasoactive intestinal peptide (VIP) positive interneurons, which can release both the peptide and the inhibitory neurotransmitter GABA. Here we used genetic ablation to determine which of those molecules secreted by VIP-ergic neurons is responsible for SSE. SSE was not impaired by VIP deletion but was prevented by compromising release of GABA from VIP cells. This finding suggests that SSE may result from Hebbian mechanisms that remain present in adult V1. SIGNIFICANCE: Many neurons package and release a peptide along with a conventional neurotransmitter. The conventional view is that such peptides exert late, slow effects on plasticity. We studied a form of cortical plasticity that depends on the activity of neurons that express both vasoactive intestinal peptide (VIP) and the inhibitory neurotransmitter GABA. GABA release accounted for their action on plasticity, with no effect of deleting the peptide on this phenomenon.
Yarbrough, J. B.; Shi, L.; Chattopadhyay, K.; Knight, R. T.; Johnson, E. L.
Working memory (WM) enables the detection of mistakes by permitting one to notice when sensory input is mismatched to one's internal prediction. Prior studies support the role of frontal midline theta activity, with an overlapping N200 event-related potential (ERP), as a mechanism for comparing incoming sensory stimuli to the internal model. Additionally, posterior low-beta activity has been proposed as a mechanism for processing incoming sensory stimuli in WM. However, it is unknown whether frontal midline theta activity and the N200 support mismatch detection, or whether posterior low-beta activity extends from sensory processing to detecting a mismatch between sensory input and the internal model. Here, we reveal that frontal midline theta supports mismatch detection and explains individual WM performance. Unexpectedly, instead of the N200, results show a positive slow wave ERP overlapping with the frontal midline theta mismatch response. Results additionally indicate a late posterior low-beta response persisting from stimulus presentation into the post-stimulus delay. Our findings establish frontal midline theta as a marker of successful mismatch detection, challenge the domain-general role of the N200 in error detection, and support theories linking posterior low-beta to processing incoming sensory stimuli.
Kobrossi, J. P.; Dooley, J. C.
REM sleep is composed of two substates--phasic and tonic--that differ in their behavioral, sensory, and electrophysiological features. Although these substates are well characterized in adults, their developmental trajectory remains unclear. Here, we examined the development of tonic and phasic REM in rats from postnatal day (P) 12-24, spanning a period of rapid corticothalamic development. We recorded local field potentials and single units from primary motor cortex (M1), together with high-speed video and electromyographic recordings of the nuchal muscle. Periods of behavioral quiescence along with high delta power indicated NREM sleep, whereas periods of sustained muscle atonia and low delta power indicated REM sleep. At P16, M1 theta oscillations first appeared, and the delay to the first twitch increased, revealing the start of a distinct twitch-free portion of REM sleep. Motivated by this, we divided REM sleep into phasic and tonic periods, with and without twitching, respectively. Spiking activity and gamma power were consistently higher during phasic REM. At P20, phasic REM also showed faster theta oscillations than tonic REM. At P24, tonic REM was accompanied by a distinct alpha oscillation. These results show that the features distinguishing the two REM substates appear sequentially across development, revealing a progressive differentiation of REM sleep into tonic and phasic periods, a developmental refinement that may support increasingly complex forms of sleep-dependent plasticity. Significance Statement: Infancy is marked by rapid circuit formation and by the dominance of REM sleep, a state thought to support early neural development. Yet sleep itself undergoes massive changes throughout this period, and the developmental timeline of REM sleep remains poorly understood. Using neural recordings and high-speed video from developing infant rats, we show that REM sleep gradually divides into two substates--tonic and phasic--across infancy.
These substates exhibit age-dependent differences in movement, neural firing, and cortical oscillations, revealing increasing microstructural complexity in REM sleep. Our findings identify when these substates first emerge and how their defining features unfold over time, providing a framework for understanding how REM sleep supports developmental plasticity throughout early life.
Bigelow, J.; Morrill, R. J.; Olsen, T.; Bazarini, S. N.; Hasenstaub, A. R.
Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for perceptual processes such as communication and spatial perception. However, much remains unknown about the microcircuit structure of these interactions, including how visual context may affect different cell types across cortical layers, each with diverse responses to sound. The present study examined activity in putative excitatory and inhibitory neurons across cortical layers of A1 in awake male and female mice during auditory, visual, and audiovisual stimulation. We observed a subpopulation of A1 neurons responsive to visual stimuli alone, which were overwhelmingly found in the deep cortical layers and included both excitatory and inhibitory cells. Other neurons for which responses to sound were modulated by visual context were similarly excitatory or inhibitory but were less concentrated within the deepest cortical layers. Important distinctions in visual context sensitivity were observed among different spike rate and timing responses to sound. Spike rate responses were themselves heterogeneous, with stronger responses evoked by sound alone at stimulus onset, but greater sensitivity to visual context by sustained firing activity following transient onset responses. Minimal overlap was observed between units with visual-modulated firing rate responses and spectrotemporal receptive fields (STRFs), which are sensitive to both spike rate and timing changes. Together, our results suggest visual information in A1 is predominantly carried by deep layer inputs and influences sound encoding across cortical layers, and that these influences independently impact qualitatively distinct responses to sound. Significance statement: Multisensory integration is ubiquitous throughout the brain, including primary sensory cortices.
The present study examined visual responses in primary auditory cortex, which were found in both putative excitatory and inhibitory neurons and concentrated in the deep cortical layers. Visual-modulated responses to sound were similarly observed in excitatory and inhibitory neurons but were more evenly distributed throughout cortical layers. Visual modulation moreover differed substantially across distinct sound response types. Transient stimulus onset spike rate changes were far less sensitive to visual context than sustained spike rate changes during the remainder of the stimulus. Spike timing changes were often modulated independently of spike rate changes. Audiovisual integration in auditory cortex is thus diversely expressed among cell types, cortical layers, and response types.
O'Sullivan, A. E.; Crosse, M. J.; Di Liberto, G. M.; de Cheveigne, A.; Lalor, E. C.
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here we sought to provide insight on these questions by examining EEG responses to natural audiovisual, audio, and visual speech in quiet and in noise. Specifically, we represented our speech stimuli in terms of their spectrograms and their phonetic features, and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis. The encoding of both spectrotemporal and phonetic features was shown to be more robust in audiovisual speech responses than would have been expected from the summation of the audio and visual speech responses, consistent with the literature on multisensory integration. Furthermore, the strength of this multisensory enhancement was more pronounced at the level of phonetic processing for speech in noise relative to speech in quiet, indicating that listeners rely more on articulatory details from visual speech in challenging listening conditions. These findings support the notion that the integration of audio and visual speech is a flexible, multistage process that adapts to optimize comprehension based on the current listening conditions. Significance Statement: During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions.
Here we examine audiovisual integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how audiovisual integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions, and when the speech is noisy, we find enhanced integration at the phonetic stage of processing. These findings provide support for the multistage integration framework and demonstrate its flexibility in terms of a greater reliance on visual articulatory information in challenging listening conditions.
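Canonical correlation analysis of the kind used above finds maximally correlated linear combinations of two data sets (e.g., stimulus features and multichannel EEG). A generic numpy sketch of the core computation — not the authors' actual pipeline, which additionally involves temporal embedding and cross-validation:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-6):
    """Canonical correlations between two centered data sets, via SVD of
    the whitened cross-covariance. `reg` guards against singular covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)  # sorted, descending
```

Stronger leading canonical correlations for audiovisual responses than for the summed unimodal responses is the kind of evidence the comparison above rests on.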
Dai, J.; Sun, Q.-Q.
Learning involves evaluating multiple dimensions of information and generating appropriate actions, yet how the brain assigns value to this information remains unclear. In this study, we show that two types of interneurons (INs) in the primary somatosensory cortex--somatostatin-expressing (SST-INs) and parvalbumin-expressing (PV-INs) neurons--differentially contribute to information evaluation during trace eyeblink conditioning (TEC). An air puff (unconditioned stimulus, US) delivered after a whisker stimulus (conditioned stimulus, CS) elicited both reflexive eye closure and stress-related locomotion. However, only self-initiated, anticipatory eye closure during the CS window, measured via electromyography (EMG), was directly relevant to learning performance. We found that changes in SST-IN activity aligned with the learning-induced changes in anticipatory eye blinks during the CS period and correlated with EMG changes across learning. In contrast, PV-IN activity was positively correlated with stress-related locomotion following the US and showed no learning-related changes, suggesting a role in processing the emotional or aversive component of the task. Furthermore, cholinergic signaling via nicotinic receptors modulated both SST- and PV-IN activities in a manner consistent with their distinctive roles, linking these interneurons to the regulation of learning-related actions and emotional responses, respectively. These findings demonstrate that distinct interneuron populations evaluate different dimensions of information--SST-INs for predictive, adaptive actions and PV-INs for stress-related emotional responses--to guide learning and behavior.
Si, Y.; Ito, S.; Litke, A. M.; Feldheim, D. A.
Locating the source of a specific sound in a complex environment and determining its saliency is critical for survival. The superior colliculus (SC), a sensorimotor midbrain structure, plays an important role in sound localization and has been shown to have a topographic map of the auditory space in a range of species. In mice, previous studies using broadband white noise stimuli found that neurons use high-frequency monaural spectral cues and interaural level differences (ILDs) to compute spatially restricted receptive fields (RFs), and that these RFs are organized topographically along the azimuth. In a naturalistic environment, however, the auditory stimuli an animal encounters have rich spectral components, yet these sound sources can still be localized efficiently. It remains unknown whether and how SC neurons respond to naturalistic sounds and, in turn, compute a spatially restricted RF. Here, we show results from large-scale in vivo physiological recordings of SC neurons in response to white noise, naturalistic ultrasonic pup calls, and chirps. We find that mouse SC auditory neurons respond to pup calls with distinct temporal patterns and a spatial preference predominantly at ~60 degrees in contralateral azimuth. In addition, we categorized auditory SC neurons based on their spectrotemporal receptive field patterns and demonstrated that there are at least 4 distinct subtypes of auditory responsive SC neurons. Significance Statement: The superior colliculus (SC) receives visual and auditory information that is used to localize objects. While the organization and composition of visually responsive SC neurons is well described, much less is known about the types and response properties of auditory SC neurons. Here, we presented white noise, ultrasonic mouse pup calls, and chirp stimuli to mice while recording from SC neurons. Analysis of neuronal responses defines 4 distinct classes of auditory neurons.
We also show that while auditory neurons respond to naturalistic stimuli, these responses mainly occur when presented from the side but not the front of the animal. These results lead to the hypothesis that mice use different strategies to localize sound depending on the spectral composition of the source.
Nentwich, M.; Leszczynski, M.; Russ, B. E.; Hirsch, L.; Markowitz, N.; Sapru, K.; Schroeder, C. E.; Mehta, A.; Bickel, S.; Parra, L. C.
Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigate the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
Criscuolo, A.; Knolle, F.; Schwartze, M.; Schröger, E.; Ivry, R.; Kotz, S. A.
The cerebellum (CE) supports the encoding of precise sensory event timing and the generation of temporal predictions. Here we investigated whether focal CE lesions impact temporal predictions in a cross-modal context. Individuals with CE lesions (n=9) and matched healthy controls (HC) were presented with visuo-auditory stimulus pairs in a temporally regular (predictable) or irregular (unpredictable) manner while EEG was recorded. We hypothesized cross-modal temporal predictions to be mediated by pre-stimulus cerebello-cortical beta-band (12-25Hz) activity. In turn, we expected HC, but not CE patients, to show a modulation of pre-stimulus beta power as a function of temporal prediction. HC showed greater pre-stimulus beta-band suppression in anticipation of sound onsets, and stronger post-stimulus delta- and theta-band (1-4Hz; 4-8Hz) power in the predictable than the unpredictable condition. Furthermore, they displayed a significant modulation of pre-stimulus delta-beta cross-frequency coupling as a function of temporal prediction. These effects were not observed in the CE group. Results confirm that cerebellar lesions impair the generation of temporal predictions in cross-modal (visuo-auditory) stimulus processing, extending the role of cerebellar predictive timing from sensorimotor to motor-independent cross-modal perception.
Tovar, D. A.; Noel, J.-P.; Ishizawa, Y.; Patel, S. R.; Eskandar, E.; Wallace, M. T.
The brain is composed of neural circuits that flexibly represent the complexity of the external world. In accomplishing this feat, one of the first attributes the brain must code for is whether a stimulus is present and, subsequently, what sensory information that stimulus contains. One of the core characteristics of that information is which sensory modality(ies) are being represented. How information regarding both the presence and modal identity of a given stimulus is represented and transformed within the brain remains poorly understood. In this study, we investigated how the brain represents the presence and modal identity of a given stimulus while tactile, audio, and audio-tactile stimuli were passively presented to non-human primates. We recorded spiking activity from primary somatosensory (S1) and ventral pre-motor (PMv) cortices, two areas known to be instrumental in transforming sensory information into motor commands for action. Using multivariate analyses to decode stimulus presence and identity, we found that information regarding stimulus presence and modal identity was present in both S1 and PMv and extended beyond the duration of significant evoked spiking activity, and that this information followed different time-courses in these two areas. Further, we combined time-generalization decoding with cross-area decoding to demonstrate that while signaling the presence of a stimulus involves a feedforward-feedback coupling between S1 and PMv, the processing of modal identity is largely restricted to S1. Together, these results highlight the differing spatiotemporal dynamics of information flow regarding stimulus presence and modal identity in two nodes of an important cortical sensorimotor circuit. Significance Statement: It is unclear how the structure and function of the brain support differing sensory functions, such as detecting the presence of a stimulus in the environment vs. identifying it.
Here, we used multivariate decoding methods on monkey neuronal data to track how information regarding stimulus presence and modal identity flows within a sensorimotor circuit. Results demonstrate that while neural patterns in both primary somatosensory (S1) and ventral pre-motor (PMv) cortices can be used to detect and discriminate between stimuli, they follow different time-courses. Importantly, findings suggest that while information regarding the presence of a stimulus flows reciprocally between S1 and PMv, information regarding stimulus identity is largely contained in S1.
Wang, Y.; Kragel, P. A.; Satpute, A. B.
The extent to which neural representations of fear experience depend on or generalize across the situational context has remained unclear. We systematically manipulated variation within and across three distinct fear-evocative situations: fear of heights, spiders, and social threats. Participants (n=21, 10 females and 11 males) viewed 20-second clips depicting spiders, heights, or social encounters, and rated fear after each video. Searchlight multivoxel pattern analysis (MVPA) was used to identify whether and which brain regions carry information that predicts fear experience, and the degree to which the fear-predictive neural codes in these areas depend upon or generalize across the situations. The overwhelming majority of brain regions carrying information about fear did so in a situation-dependent manner. These findings suggest that local neural representations of fear experience are unlikely to involve a singular pattern, but rather a collection of multiple heterogeneous brain states.
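The situation-(in)dependence tested above is commonly operationalized as cross-condition decoding: train a pattern classifier on data from one situation and test it on another, with poor generalization indicating a situation-dependent code. A generic sketch of that logic using a nearest-centroid classifier — the authors used searchlight MVPA across voxels; the labels, features, and classifier here are illustrative stand-ins:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Mean pattern per class (e.g., high vs. low reported fear)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    # Assign each test pattern to the class with the nearest centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

def cross_situation_accuracy(X_train, y_train, X_test, y_test):
    """Train on one situation (e.g., spiders), test on another (e.g., heights)."""
    classes, centroids = nearest_centroid_fit(X_train, y_train)
    return float(np.mean(nearest_centroid_predict(X_test, classes, centroids) == y_test))
```

High within-situation accuracy combined with near-chance cross-situation accuracy is the signature of the situation-dependent codes the study reports.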
Meng, Q.; Li Hegner, Y.; Giblin, I.; McMahon, C.; Johnson, B. W.
Neural activity has been shown to track hierarchical linguistic units in connected speech, and these responses can be directly modulated by changes in speech intelligibility caused by spectral degradation. In the current study, we manipulated prior knowledge to increase the intelligibility of physically identical speech sentences and tested the hypothesis that the tracking responses can be enhanced by this intelligibility improvement. Cortical magnetoencephalography (MEG) responses to intelligible speech followed by either the same (matched) or different (unmatched) unintelligible speech were measured in twenty-three normal-hearing participants. Driven by prior knowledge, cortical coherence to "abstract" linguistic units with no accompanying acoustic cues (phrases and sentences) was enhanced relative to the unmatched condition and was lateralized to the left hemisphere. In contrast, cortical responses coherent to word units, aligned with acoustic onsets, were bilateral and insensitive to contextual information changes. No such coherence changes were observed when prior experience was not available (unintelligible speech before intelligible speech). This dissociation suggests that cerebral responses to linguistic information are directly affected by intelligibility, which in turn is powerfully shaped by physical cues in speech. These results provide an objective and sensitive neural index of speech intelligibility and explain why previous studies have reported no effect of prior knowledge on cortical entrainment.
Fulvio, J. M.; Postle, B. R.
The flexible control of working memory (WM) requires prioritizing immediately task-relevant information while maintaining information with potential future relevance in a deprioritized state. Using double-serial retrocuing (DSR) with simultaneous EEG recording, we investigated how single pulses of transcranial magnetic stimulation (spTMS) to the right intraparietal sulcus impact neural representations of unprioritized memory items (UMI), relative to irrelevant memory items (IMI) that are no longer needed for the trial. Twelve human participants (8 female) performed DSR plus a single-retrocue task, while spTMS was delivered during delay periods. Multivariate pattern analysis revealed that spTMS restored decodability of the UMI concurrent with stimulation, and that of the IMI several timesteps later, after the evoked effects of spTMS were no longer present in the EEG signal. This effect was carried by the alpha (8-13 Hz) and low-beta (13-20 Hz) frequency bands. Analyses of the raw EEG signal showed two effects selective to the epoch containing the UMI: the retrocue and spTMS each produced phase shifts in the low-beta band. These findings demonstrate that deprioritization involves active neural mechanisms distinct from the processing of the IMI, and that these are supported by low-beta oscillatory dynamics in parietal cortex. We hypothesize that the mechanism underlying spTMS-triggered involuntary retrieval of the UMI is the disruption of the encoding of priority status, which may depend on oscillatory dynamics in the low-beta band.
Significance Statement: This study provides key insights into how the brain maintains information at different levels of priority in working memory. Importantly, it shows how deprioritizing information in working memory is different from simply "dropping" it.
By combining transcranial magnetic stimulation with high temporal resolution measurement of neural signals (with EEG), we replicate previous findings that spTMS triggers the involuntary retrieval of previously unprioritized information and provide new insight into how it does this. Specifically, at the level of EEG dynamics, spTMS is shown to uniquely alter oscillatory dynamics in the low-beta band.
Kryklywy, J.; Ehlers, M. R.; Beukers, A. O.; Moore, S. R.; Todd, R.; Anderson, A. K.
Emotion is typically understood to be an internal subjective experience originating in the brain. Yet in the somatosensory system, hedonic information is coded by mechanoreceptors at the point of sensory contact before it reaches the central nervous system. It remains unknown, however, how these distinct peripheral channels for tactile hedonic information contribute to representations of interoceptive states relative to exteroceptive experience. In this fMRI study, we applied representational similarity analyses with pattern component modeling, a technique that deconstructs representational states into a weighted set of distinct predefined constructs, to dissociate how discriminatory vs. hedonic tactile information, carried by A- and C-/CT-fibers respectively, contributes to population-code representations in the human brain. Results demonstrated that information about appetitive and aversive tactile sensation is represented separately from non-hedonic tactile information across cortical structures. Specifically, although hedonic touch originates as a peripheral signal, labeled at the point of contact, representations in somatosensory cortices are guided by experiences of non-hedonic touch. By contrast, representations in regions associated with interoception and affect encode signals of hedonic touch. This provides evidence of complex tactile encoding that involves both external-exteroceptive and internal-interoceptive dimensions. Importantly, hedonic touch contributes to representations of internal state as well as those of externally generated stimulation.
Nicholson, M.; Wood, R. J.; Fletcher, J.; Gonsalvez, D. G.; Hannan, A. J.; Murray, S. S.; Xiao, J.
Oligodendrocyte production and central nervous system (CNS) myelination is a protracted process, extending into adulthood. While stimulation of neuronal circuits has been shown to enhance oligodendrocyte production and myelination during development, the extent to which physiological stimuli induce activity-dependent plasticity within oligodendrocytes and myelin is unclear, particularly in the adult CNS. Here, we find that using environmental enrichment (EE) to physiologically stimulate neuronal activity for 6 weeks during young adulthood in C57Bl/6 mice results in an enlargement of callosal axon diameters, with a corresponding increase in the thickness of pre-existing myelin sheaths. Additionally, EE uniformly promotes the direct differentiation of pre-existing oligodendroglia in both the corpus callosum and somatosensory cortex, while differentially impeding OPC homeostasis in these regions. Furthermore, results of this study indicate that physiologically relevant stimulation in young adulthood exerts little influence upon the de novo generation of new myelin sheaths on previously unmyelinated segments and does not enhance OPC proliferation. Rather, in this context, activity-dependent plasticity involves the coincident structural remodeling of axons and pre-existing myelin sheaths and increases the direct differentiation of pre-existing oligodendroglia, implying constraints on maximal de novo production in the adult CNS. Together, our findings of myelinated axon remodeling and increased pre-existing oligodendroglial differentiation constitute a previously undescribed form of adaptive myelination that likely contributes to neuronal circuit maturation and the maintenance of optimum cognitive function in the young adult CNS.
Main points:
- Environmental enrichment induces the plasticity of myelinated axons, resulting in axon caliber enlargement and increased thickness of pre-existing myelin sheaths
- Environmental enrichment increases the direct differentiation of pre-existing oligodendroglia
- Environmental enrichment alters OPC homeostasis
Acar, K.; Smith, M. A.
The locus coeruleus (LC) is the primary source of norepinephrine in the brain and has been implicated in the processes of attention, arousal, and perceptual decision making. Although prior work has linked transient LC activation to both sensory stimulus processing and motor processing, the precise contribution of LC to the distinct sensory and motor components of perceptual decisions remains unclear. Here, we recorded the spiking activity of single LC neurons in rhesus macaques while they performed a visual two-alternative forced-choice change detection task with a saccadic report, designed to cleanly dissociate sensory and motor contributions to LC activity. We found that the large majority of recorded neurons showed robust increases in response tightly locked to the choice saccade, while only a small fraction showed significant responses to the visual stimuli. Saccade-aligned LC responses did not vary with behavioral outcome, perceptual difficulty, reaction time, or session-wide fluctuations in perceptual sensitivity and criterion, indicating that LC motor-related signals were dissociated from perceptual performance. Together, these results demonstrate the existence of a subpopulation of LC neurons whose activity is tightly coupled to oculomotor output across both voluntary and involuntary eye movements during perceptual decision making, but independent of perceptual decision accuracy. Our findings support a role for LC in facilitating motor preparation and execution in response to behaviorally significant sensory events.